#machine learning chatgpt
Explore tagged Tumblr posts
edsonjnovaes · 5 months ago
Text
What about Adobe, huh?!? 1.2
Adobe, maker of software such as Photoshop, stirred controversy by changing its "terms of use" in a way that suggested it could use its users' artwork to train its artificial intelligence. The change upset users but cheered investors, sending the company's shares (#ADBE) up more than 14% on Friday the 14th. Giulia Frazão – Monitor do mercado. 14 Jun 2024 What about…
Tumblr media
0 notes
river-taxbird · 3 months ago
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" AI that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has seemingly already slurped up all the data from the open web. ChatGPT 5 would take 5x more training data than ChatGPT 4 did. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support ai, what trillion dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital and it's unclear if Open AI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current ai is a solution to. Consumer tech is basically solved, normal people don't need more tech than a laptop and a smartphone. Big tech have run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT 4, which itself wasn't that big of an upgrade over 3.
There is currently no technological roadmap for ai to become better than it is. (As Jim Covello said on the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes · View notes
kenyatta · 2 years ago
Quote
I once had ChatGPT insist that a particular composer wrote music for a game, even going so far as to list particular songs from the soundtrack that they were supposedly responsible for, and it helpfully provided hallucinatory citations when I asked for them (a broken link on the game publisher's website and a link to Wikipedia, which did not in fact support its assertion either now or at any point in the article's history). Nor could I find anywhere else on the internet where someone even mistakenly believed that that composer had worked on the game. ChatGPT lies not because it's regurgitating falsehoods that it found on the internet - it lies because it invents new falsehoods on its own. It's not just trained on stuff on the internet that's wrong; it's trained to be confidently wrong in general. It doesn't know what facts are, it just knows how to produce things that are shaped like facts and shove them in fact-shaped holes. I personally wasted 30 minutes of my life fact-checking/"not believing everything it says", when it confidently told me something surprising. My horizons were not broadened by exposing me to "different worldviews". This was unequivocally a negative experience for me.
comment on a MetaFilter post about AI: "My goal is to be helpful, harmless, and honest."
15K notes · View notes
gynoidgearhead · 7 months ago
Text
we need to come up with a good word for ""AI"" that doesn't imply it's artificial or intelligent and highlights the stolen human labor. like what if we call it "theftgen"
(workshop this with me)
1K notes · View notes
prokopetz · 2 years ago
Text
With all the effort they're putting into making sure that ChatGPT never says or does anything even the tiniest bit unmarketable I give it even odds that within two years we end up with a situation where you ask it the wrong sort of question and it automatically calls the cops.
5K notes · View notes
tumbler-polls · 12 days ago
Text
Tumblr media
183 notes · View notes
mostlysignssomeportents · 2 years ago
Text
The AI hype bubble is the new crypto hype bubble
Tumblr media
Back in 2017, Long Island Iced Tea — known for its undistinguished, barely drinkable sugar-water — changed its name to “Long Blockchain Corp.” Its shares surged to a peak of 400% over their pre-announcement price. The company announced no specific integrations with any kind of blockchain, nor has it made any such integrations since.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
LBCC was subsequently delisted from NASDAQ after settling with the SEC over fraudulent investor statements. Today, the company trades over the counter and its market cap is $36m, down from $138m.
https://cointelegraph.com/news/textbook-case-of-crypto-hype-how-iced-tea-company-went-blockchain-and-failed-despite-a-289-percent-stock-rise
The most remarkable thing about this incredibly stupid story is that LBCC wasn’t the peak of the blockchain bubble — rather, it was the start of blockchain’s final pump-and-dump. By the standards of 2022’s blockchain grifters, LBCC was small potatoes, a mere $138m sugar-water grift.
They didn’t have any NFTs, no wash trades, no ICO. They didn’t have a Superbowl ad. They didn’t steal billions from mom-and-pop investors while proclaiming themselves to be “Effective Altruists.” They didn’t channel hundreds of millions to election campaigns through straw donations and other forms of campaign finance fraud. They didn’t even open a crypto-themed hamburger restaurant where you couldn’t buy hamburgers with crypto:
https://robbreport.com/food-drink/dining/bored-hungry-restaurant-no-cryptocurrency-1234694556/
They were amateurs. Their attempt to “make fetch happen” only succeeded for a brief instant. By contrast, the superpredators of the crypto bubble were able to make fetch happen over an improbably long timescale, deploying the most powerful reality distortion fields since Pets.com.
Anything that can’t go on forever will eventually stop. We’re told that trillions of dollars’ worth of crypto has been wiped out over the past year, but these losses are nowhere to be seen in the real economy — because the “wealth” that was wiped out by the crypto bubble’s bursting never existed in the first place.
Like any Ponzi scheme, crypto was a way to separate normies from their savings through the pretense that they were “investing” in a vast enterprise — but the only real money (“fiat” in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble’s energetic inflaters swapped for illiquid, worthless shitcoins.
We’ve stopped believing in the illusory billions. Sam Bankman-Fried is under house arrest. But the people who gave him money — and the nimbler Ponzi artists who evaded arrest — are looking for new scams to separate the marks from their money.
Take Morganstanley, who spent 2021 and 2022 hyping cryptocurrency as a massive growth opportunity:
https://cointelegraph.com/news/morgan-stanley-launches-cryptocurrency-research-team
Today, Morganstanley wants you to know that AI is a $6 trillion opportunity.
They’re not alone. The CEOs of Endeavor, Buzzfeed, Microsoft, Spotify, Youtube, Snap, Sports Illustrated, and CAA are all out there, pumping up the AI bubble with every hour that god sends, declaring that the future is AI.
https://www.hollywoodreporter.com/business/business-news/wall-street-ai-stock-price-1235343279/
Google and Bing are locked in an arms-race to see whose search engine can attain the speediest, most profound enshittification via chatbot, replacing links to web-pages with florid paragraphs composed by fully automated, supremely confident liars:
https://pluralistic.net/2023/02/16/tweedledumber/#easily-spooked
Blockchain was a solution in search of a problem. So is AI. Yes, Buzzfeed will be able to reduce its wage-bill by automating its personality quiz vertical, and Spotify’s “AI DJ” will produce slightly less terrible playlists (at least, to the extent that Spotify doesn’t put its thumb on the scales by inserting tracks into the playlists whose only fitness factor is that someone paid to boost them).
But even if you add all of this up, double it, square it, and add a billion dollar confidence interval, it still doesn’t add up to what Bank Of America analysts called “a defining moment — like the internet in the ’90s.” For one thing, the most exciting part of the “internet in the ‘90s” was that it had incredibly low barriers to entry and wasn’t dominated by large companies — indeed, it had them running scared.
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to “This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves.” Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
https://open.spotify.com/episode/4NHKMZZNKi0w9mOhPYIL4T
We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative — it wants you to do the same thing tomorrow as you did yesterday (that’s why “sophisticated” ad retargeting ads show you ads for shoes in response to your search for shoes). If the word you type after “hey” is usually “hon” then the next time you type “hey,” autocomplete will be ready to fill in your typical following word — even if this time you want to type “hey stop texting me you freak”:
https://blog.lareviewofbooks.org/provocations/neophobic-conservative-ai-overlords-want-everything-stay/
And when autocomplete encounters a new input — when you try to type something you’ve never typed before — it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average. Usually that produces something utterly bland, but sometimes the results can be hilarious. Back in 2018, I started to text our babysitter with “hey are you free to sit” only to have Android finish the sentence with “on my face” (not something I’d ever typed!):
https://mashable.com/article/android-predictive-text-sit-on-my-face
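That "statistically median next word" behavior is easy to demonstrate. Below is a toy sketch, not any real keyboard's code: a bigram counter over a made-up message history, showing why the most frequent follower always wins.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def autocomplete(followers, word):
    """Suggest the single most frequent follower -- the statistical median."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# Hypothetical message history: "hey hon" dominates, so "hey" will always
# complete to "hon" -- even the day you want "hey stop texting me you freak".
history = "hey hon hey hon hey hon hey stop texting me"
model = train_bigram(history)
print(autocomplete(model, "hey"))  # prints "hon"
```

Scaled up by a few billion parameters, the same "do tomorrow what you did yesterday" logic holds: the model emits the likeliest continuation, not the true one.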
Modern autocomplete can produce long passages of text in response to prompts, but it is every bit as unreliable as 2018 Android SMS autocomplete, as Alexander Hanff discovered when ChatGPT informed him that he was dead, even generating a plausible URL for a link to a nonexistent obit in The Guardian:
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
Of course, the carnival barkers of the AI pump-and-dump insist that this is all a feature, not a bug. If autocomplete says stupid, wrong things with total confidence, that’s because “AI” is becoming more human, because humans also say stupid, wrong things with total confidence.
Exhibit A is the billionaire AI grifter Sam Altman, CEO of OpenAI — a company whose products are not open, nor are they artificial, nor are they intelligent. Altman celebrated the release of ChatGPT by tweeting “i am a stochastic parrot, and so r u.”
https://twitter.com/sama/status/1599471830255177728
This was a dig at the “stochastic parrots” paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire Timnit Gebru, a respected AI researcher, for having the audacity to point out the Emperor’s New Clothes:
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Gebru’s co-author on the Parrots paper was Emily M Bender, a computational linguistics specialist at UW, who is one of the best-informed and most damning critics of AI hype. You can get a good sense of her position from Elizabeth Weil’s New York Magazine profile:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Bender has made many important scholarly contributions to her field, but she is also famous for her rules of thumb, which caution her fellow scientists not to get high on their own supply:
Please do not conflate word form and meaning
Mind your own credulity
As Bender says, we’ve made “machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it.” One potential tonic against this fallacy is to follow an Italian MP’s suggestion and replace “AI” with “SALAMI” (“Systematic Approaches to Learning Algorithms and Machine Inferences”). It’s a lot easier to keep a clear head when someone asks you, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
Bender’s most famous contribution is the “stochastic parrot,” a construct that “just probabilistically spits out words.” AI bros like Altman love the stochastic parrot, and are hellbent on reducing human beings to stochastic parrots, which will allow them to declare that their chatbots have feature-parity with human beings.
At the same time, Altman and Co are strangely afraid of their creations. It’s possible that this is just a shuck: “I have made something so powerful that it could destroy humanity! Luckily, I am a wise steward of this thing, so it’s fine. But boy, it sure is powerful!”
They’ve been playing this game for a long time. People like Elon Musk (an investor in OpenAI, who is hoping to convince the EU Commission and FTC that he can fire all of Twitter’s human moderators and replace them with chatbots without violating EU law or the FTC’s consent decree) keep warning us that AI will destroy us unless we tame it.
There’s a lot of credulous repetition of these claims, and not just by AI’s boosters. AI critics are also prone to engaging in what Lee Vinsel calls criti-hype: criticizing something by repeating its boosters’ claims without interrogating them to see if they’re true:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
There are better ways to respond to Elon Musk warning us that AIs will emulsify the planet and use human beings for food than to shout, “Look at how irresponsible this wizard is being! He made a Frankenstein’s Monster that will kill us all!” Like, we could point out that of all the things Elon Musk is profoundly wrong about, he is most wrong about the philosophical meaning of Wachowski movies:
https://www.theguardian.com/film/2020/may/18/lilly-wachowski-ivana-trump-elon-musk-twitter-red-pill-the-matrix-tweets
But even if we take the bros at their word when they proclaim themselves to be terrified of “existential risk” from AI, we can find better explanations by seeking out other phenomena that might be triggering their dread. As Charlie Stross points out, corporations are Slow AIs, autonomous artificial lifeforms that consistently do the wrong thing even when the people who nominally run them try to steer them in better directions:
https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
Imagine the existential horror of an ultra-rich manbaby who nominally leads a company, but can’t get it to follow: “everyone thinks I’m in charge, but I’m actually being driven by the Slow AI, serving as its sock puppet on some days, its golem on others.”
Ted Chiang nailed this back in 2017 (the same year of the Long Island Blockchain Company):
There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
Chiang is still writing some of the best critical work on “AI.” His February article in the New Yorker, “ChatGPT Is a Blurry JPEG of the Web,” was an instant classic:
[AI] hallucinations are compression artifacts, but — like the incorrect labels generated by the Xerox photocopier — they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
“AI” is practically purpose-built for inflating another hype-bubble, excelling as it does at producing party-tricks — plausible essays, weird images, voice impersonations. But as Princeton’s Matthew Salganik writes, there’s a world of difference between “cool” and “tool”:
https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
Nature can claim “conversational AI is a game-changer for science” but “there is a huge gap between writing funny instructions for removing food from home electronics and doing scientific research.” Salganik tried to get ChatGPT to help him with the most banal of scholarly tasks — aiding him in peer reviewing a colleague’s paper. The result? “ChatGPT didn’t help me do peer review at all; not one little bit.”
The criti-hype isn’t limited to ChatGPT, of course — there’s plenty of (justifiable) concern about image and voice generators and their impact on creative labor markets, but that concern is often expressed in ways that amplify the self-serving claims of the companies hoping to inflate the hype machine.
One of the best critical responses to the question of image- and voice-generators comes from Kirby Ferguson, whose final Everything Is a Remix video is a superb, visually stunning, brilliantly argued critique of these systems:
https://www.youtube.com/watch?v=rswxcDyotXA
One area where Ferguson shines is in thinking through the copyright question — is there any right to decide who can study the art you make? Except in some edge cases, these systems don’t store copies of the images they analyze, nor do they reproduce them:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
For creators, the important material question raised by these systems is economic, not creative: will our bosses use them to erode our wages? That is a very important question, and as far as our bosses are concerned, the answer is a resounding yes.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor:
https://www.gutenberg.org/files/59127/59127-h/59127-h.htm
Today, voice actors sitting down to record for games companies are forced to begin each session with “My name is ______ and I hereby grant irrevocable permission to train an AI with my voice and use it any way you see fit.”
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
Let’s be clear here: there is — at present — no firmly established copyright over voiceprints. The “right” that voice actors are signing away as a non-negotiable condition of doing their jobs for giant, powerful monopolists doesn’t even exist. When a corporation makes a worker surrender this right, they are betting that this right will be created later in the name of “artists’ rights” — and that they will then be able to harvest this right and use it to fire the artists who fought so hard for it.
There are other approaches to this. We could support the US Copyright Office’s position that machine-generated works are not works of human creative authorship and are thus not eligible for copyright — so if corporations wanted to control their products, they’d have to hire humans to make them:
https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
Or we could create collective rights that belong to all artists and can’t be signed away to a corporation. That’s how the right to record other musicians’ songs works — and it’s why Taylor Swift was able to re-record the masters that were sold out from under her by evil private-equity bros:
https://doctorow.medium.com/united-we-stand-61e16ec707e2
Whatever we do as creative workers and as humans entitled to a decent life, we can’t afford to drink the Blockchain Iced Tea. That means that we have to be technically competent, to understand how the stochastic parrot works, and to make sure our criticism doesn’t just repeat the marketing copy of the latest pump-and-dump.
Today (Mar 9), you can catch me in person in Austin at the UT School of Design and Creative Technologies, and remotely at U Manitoba’s Ethics of Emerging Tech Lecture.
Tomorrow (Mar 10), Rebecca Giblin and I kick off the SXSW reading series.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
[Image ID: A graph depicting the Gartner hype cycle. A pair of HAL 9000's glowing red eyes are chasing each other down the slope from the Peak of Inflated Expectations to join another one that is at rest in the Trough of Disillusionment. It, in turn, sits atop a vast cairn of HAL 9000 eyes that are piled in a rough pyramid that extends below the graph to a distance of several times its height.]
2K notes · View notes
herigo · 1 year ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media
497 notes · View notes
sag-dab-sar · 4 months ago
Text
Clarification: Generative AI does not equal all AI
💭 "Artificial Intelligence"
AI is machine learning, deep learning, natural language processing, and more that I'm not smart enough to know. It can be extremely useful in many different fields and technologies. One of my information & emergency management courses described the use of AI as a "human centaur": part human, part machine, meaning AI can assist in all the things we already do and supplement our work by doing what we can't.
💭 Examples of AI Benefits
AI can help advance things in all sorts of fields, here are some examples:
Emergency Healthcare & Disaster Risk X
Disaster Response X
Crisis Resilience Management X
Medical Imaging Technology X
Commercial Flying X
Air Traffic Control X
Railroad Transportation X
Ship Transportation X
Geology X
Water Conservation X
Can AI technology be used maliciously? Yeah. That's a matter of developing ethics and teaching people to spot red flags, just as they do with already existing technology.
AI isn't evil. It's not the insane sentient shit that wants to kill us in movies. And it is not synonymous with generative AI.
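To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of "boring" predictive machine learning used in fields like disaster response: a nearest-centroid classifier. The river-gauge numbers are invented for illustration; real systems are far more sophisticated, but the shape is the same: learn from labeled measurements, then label new ones.

```python
def centroid(rows):
    """Average each feature across the training rows for one class."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, centroids):
    """Label a sample by whichever class centroid it sits closest to."""
    return min(centroids, key=lambda label: distance(sample, centroids[label]))

# Invented river-gauge readings: (water level in m, rainfall in mm/h)
training = {
    "normal": [(1.0, 2.0), (1.2, 0.0), (0.9, 5.0)],
    "flood risk": [(3.1, 30.0), (2.8, 45.0), (3.5, 25.0)],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

print(classify((3.0, 40.0), centroids))  # prints "flood risk"
```

Nothing here generates anything or scrapes anyone's work; it just predicts a label from measurements, which is the "human centaur" use described above.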
💭 Generative AI
Generative AI does use these technologies, but it uses them unethically. It scrapes data from all the art, writing, videos, games, and audio its developers give it access to WITHOUT PERMISSION, which is basically free rein over the internet. There are sometimes restrictions: generative AI engineers, who CAN choose to exclude things, may blacklist extremist sites or explicit material.
AI can create images of real individuals without permission, including revenge porn. It can create music using someone's voice without their permission and then sell that music. It can spread disinformation faster than it can be fact-checked, and create false evidence that our court systems are not ready to handle.
AI bros eat it up without question: "it makes art more accessible", "it'll make entertainment production cheaper", "it's the future, evolve!!!"
💭 AI is not similar to human thinking
When faced with the argument "a human didn't make it," the comeback is "AI learns based on already existing information, which is exactly what humans do when producing art! We ALSO learn from others and see thousands of other artworks."
Let's make something clear: generative AI isn't making anything original. It is true that human beings process all the information we come across. We observe that information, learn from it, process it, then ADD our own understanding of the world and our unique lived experiences. Through that collection of information, that understanding, and our own personalities, we create new, original things.
💭 Generative AI doesn't create things: it mimics things
Take an analogy:
Consider an infant unable to talk but old enough to engage with their caregivers, somewhere between 6 and 8 months old.
Mom: a bird flaps its wings to fly!!! *makes a flapping motion with arm and hands*
Infant: *giggles and makes a flapping motion with arms and hands*
The infant does not understand what a bird is, what wings are, or the concept of flight. But she still fully mimicked the flapping of the hands and arms because her mother did it first to show her. She doesn't cognitively understand what on earth any of it means, but she was still able to do it.
In the same way, generative AI is the infant that copies what humans have done: mimicry, without understanding anything about the works it has stolen.
It's not original, it doesn't have a worldview, it doesn't understand the emotions that go into the work it is stealing, its creations have no meaning, and it has no motivation to create things; it only does so because it was told to.
Why read a book someone couldn't even be bothered to write?
Related videos I find worth a watch
ChatGPT's Huge Problem by Kyle Hill (we don't understand how AI works)
Criticism of Shadiversity's "AI Love Letter" by DeviantRahll
AI Is Ruining the Internet by Drew Gooden
AI vs The Law by Legal Eagle (AI & US Copyright)
AI Voices by Tyler Chou (Short, flash warning)
Dead Internet Theory by Kyle Hill
-Dyslexia, not audio proof read-
58 notes · View notes
queerism1969 · 1 year ago
Text
Tumblr media
294 notes · View notes
nixcraft · 2 years ago
Text
Tumblr media
496 notes · View notes
voidsweirdthoughts · 23 days ago
Text
TO ALL ARTISTS!!
If you don’t support AI generated stuff, plz read this!!
There’s this tool called Nightshade that poisons any drawing you run through the program, so that when an AI tries to steal it, the AI's training gets damaged and its “art” comes out rlly bad >:3
Many people still think that AI is good and are confused about why so many of us hate it, so let me explain:
1: AI steals people’s jobs
Artists, writers, musicians, trip planners… people who enjoy what they do so much that they decide to do it for the rest of their lives get their dreams completely discarded by others, who justify it by saying it's “a cheap and accessible way to ‘create’ art/music/…”, while the ones who get hurt, both financially and emotionally, are the creators of those pieces.
2: Art is human
Art is basically one of the few things that only humans can do. It can reflect our emotions, and its beauty comes from the heart of someone who enjoys what they do. By supporting AI, you’re supporting feelingless programs built to steal that form of self-expression and all the effort you put into making it, all for “growing in technology” or whatever, which leads us to the next point:
3: Art takes effort
Producing music (writing, playing an instrument, singing…), drawing (digital, 3D, traditional…), writing (studying orthography, structure, etc., plus building characters, places, and plot), and all other forms of art share something else: effort. The time it takes to study, practice, find a style, and perfect it can take a whole life. A whole life of making what you love, only for it to be taken away by a machine in a few seconds of “loading audio/image/text…”
There are many more reasons that I can’t cover right now, if anyone who can write more sees this, please reblog it saying so. The more, the better :)
If you’re going to use it, I’d recommend posting the poisoned work on Twitter (since yk what happens there -_-), but if you do it on Tumblr, don’t reblog it, so they don’t suspect, or however Tumblr works with these things. For this to work, go to settings in Tumblr and turn ON the permission for AI to use your work (I don’t recommend doing it on Tumblr, though: it exposes ALL of your artwork and writing, not only the poisoned pieces, so be aware).
But if you’re not going to use it yourself, plz spread the word!!
Thx for your time and have a nice day ^^
31 notes · View notes
Anyway if you are one of those people who genuinely believes ai creations are a good thing then that suggests you have never produced a creative work worth protecting from theft and corporate-sponsored irrelevance, ever, and/or that you do not give a shit about one. single. person who has. And that is so, so much sadder and says so much more about you than any stupid jokes you could make about creators trying to protect their work being Luddites or whatever.
343 notes · View notes
kingly-genderfluid · 1 year ago
Text
Well.
I have a shit relationship with my dad. He openly expresses how AI can AND WILL replace writers.
Which is horrid, considering that his “straight catholic and cisgender female” daughter is currently working on a book of their own.
he invited me for dinner, and conversation got…
heated.
mom and little brother stayed out of it-wisely so.
so he openly tells me, “Look, I can openly type out a prompt, and Chat GPT will just write it for me.”
I should have kept a cool head, but I yelled. I think a neighbour heard me.
I kept shutting down his remarks about how my writing wouldn’t matter because of AI.
so. I’m at war. A very. Very. Important war. For the future of all writers, and artists alike. If I don’t start shutting off this thinking from my own home, the world will begin believing that AI can solve HUMAN PROBLEMS. AI leeches off what humans create. What we worked so hard to do. The hours we took to make people cry, get angry, or be happy.
AI doesn’t even pay workers. AI can’t comfort someone with ALGORITHM. AI can’t do anything without someone already having done it. AI can’t make anything relatable, because it’s a BOT. Since when do BOTS, feel HUMAN emotion.
sometimes the information isn’t even right.
So I’m at war.
I support the writers guild.
I support ANY struggling writer, going against the current of AI.
Fuck it, we ball.
kids who love to write-AND IVE SEEN THESE KIDS WITH A BURNING PASSION FOR THE ART OF LANGUAGE, I WAS ONE OF THEM TOO-they are about to get their dreams to make someone see their vision, their imagination, stripped away from them. Because of bots.
they can assist with creativity, maybe. Fixing up a few mishaps in grammar. That’s all they’re good for. You can’t make a movie without the vision of a human behind it.
watermark your arts.
I don’t know how to protect your fan fiction, but we’re going to do our best.
This is my mini war.
might be a bigger one.
who knows ;)
173 notes · View notes
prokopetz · 1 year ago
Note
I just watched a video where someone is using ChatGPT to generate comments on their code. Even as a layman I feel like I should be screaming at him, but on a scale from 1 to apocalypse, how bad is this?
Machine-generated comments could not possibly be more useless, nonsensical or maliciously misleading than most of the human-generated comments I've seen.
1K notes · View notes
ugackminer · 2 years ago
Text
So, I made a tool to stop AI from stealing from writers
So seeing this post really inspired me to make a tool that writers could use to make their text unreadable to AI.
And it works! You can try out the online demo, and view all of the code that runs it here!
It does more than just mangle text though! It's also able to invisibly hide author and copyright info, so that you can have definitive proof that someone's stealing your works if they're doing a simple copy and paste!
Below is an example of Scrawl in action!
Τо հսⅿаոѕ, 𝗍հᎥꜱ 𝗍ех𝗍 𐌉ο໐𝗄ꜱ ո໐𝗋ⅿаⵏ, 𝖻ս𝗍 𝗍о ᴄоⅿрս𝗍е𝗋ꜱ, Ꭵ𝗍'ѕ սո𝗋еаⅾа𝖻ⵏе!
[Text reads "To humans, this text looks normal, but to computers, it's unreadable!"]
Of course, this "Anti-AI" mode comes with some pretty serious accessibility issues, like breaking screen readers and other TTS software, but there's no real way to make text readable to one AI but not to another AI.
If you're okay with it, you can always have Anti-AI mode off, which will make it so that AIs can understand your text while embedding invisible characters to save your copyright information! (as long as the website you're posting on doesn't remove those characters!)
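This isn't Scrawl's actual code (that's in the linked repo), but the two techniques described above, homoglyph substitution and zero-width watermarking, can be sketched in a few lines. The tiny homoglyph table and the bit encoding here are illustrative assumptions:

```python
# Illustrative sketch only: a handful of Cyrillic lookalikes, and a copyright
# string hidden as zero-width characters. Real tools use far larger tables.
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "p": "р", "c": "с"}
ZWSP, ZWNJ, ZWJ = "\u200b", "\u200c", "\u200d"  # zero-width characters

def mangle(text):
    """Replace letters with visually identical homoglyphs."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def embed_watermark(text, mark):
    """Append `mark` encoded as invisible zero-width bits."""
    bits = "".join(format(ord(ch), "08b") for ch in mark)
    hidden = ZWSP + "".join(ZWJ if b == "1" else ZWNJ for b in bits)
    return text + hidden

def extract_watermark(text):
    """Recover the hidden string, if any."""
    if ZWSP not in text:
        return None
    bits = "".join("1" if ch == ZWJ else "0" for ch in text.split(ZWSP, 1)[1])
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stamped = embed_watermark(mangle("my art"), "© me 2023")
# To a human the mangled text looks normal; to a byte-for-byte comparison it
# no longer matches the original, and the watermark rides along invisibly.
print(extract_watermark(stamped))  # prints "© me 2023"
```

The same trade-off noted above applies to this sketch: the homoglyphs defeat naive scrapers and screen readers alike, while the zero-width watermark survives copy-paste only until a site strips those characters.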
But, the Anti-AI mode is pretty cool.
Tumblr media
351 notes · View notes